Outline
1. Files Needed or Produced by Software
2. Training and Testing Data
a. Standard Format
b. Data Files Included With This Package
3. File and Neural Net Limitations
1. Files Needed or Produced by Software
a. MLP and functional link neural networks typically have three types
of files associated with them. These three types are:
(1) The network structure file. For the MLP, this file specifies the
number of network layers, the number of artificial neurons (called
units) in each layer, and the index of the first layer to which the
third and fourth layers (if there is a fourth) connect. For the
functional link net, this file contains the network degree P (usually
an integer between 1 and 5), the number of network inputs N, the
number of outputs, and the dimension of the multinomial vector,
which is L = (N+P)!/(N!P!).
(2) The weight file, which gives the gains or coefficients along
paths connecting the various units.
(3) The training or testing data file, which gives example inputs
and outputs for network learning, or for testing after learning.
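The multinomial vector dimension quoted for the functional link net can be
computed directly from the formula above; this small sketch (not part of the
package) does so with integer factorials:

```python
from math import factorial

def multinomial_dim(n_inputs: int, degree: int) -> int:
    """Dimension L = (N+P)!/(N!P!) of the multinomial vector
    for a functional link net with N inputs and degree P."""
    return factorial(n_inputs + degree) // (factorial(n_inputs) * factorial(degree))

# For example, a net with N = 4 inputs at degree P = 3:
print(multinomial_dim(4, 3))  # 35
```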
b. The network structure files have the extension "top". You can create
your own network structure files within the backpropagation, fast
training, and functional link programs if you want. Consider the MLP
network structure file GLS.top, shown below.
4
4 20 15 1
1 1 1
It has 4 layers. The first layer has 4 inputs, which means that
each training or testing pattern has 4 numbers. It has 20 units in
the first hidden layer ("hidden" meaning that the layer is neither an
input nor an output layer) and 15 units in the second hidden layer.
The output layer has 1 unit. The last line of "1"s means that
layers 2, 3, and 4 connect to layer 1, to layers 1 and 2, and to
layers 1, 2, and 3, respectively. This network is "fully connected",
meaning that each layer connects to all previous layers. Fully
connected networks are more powerful, and train faster, than
non-fully-connected networks, and a fully connected network is
almost always smaller than a non-fully-connected network that
performs the same operation.
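The three-line layout of GLS.top can be read back programmatically. The
following is a sketch inferred from the example above, not the package's own
reader:

```python
def read_top(path):
    """Parse an MLP ".top" structure file with the layout described
    above: line 1 is the number of layers, line 2 lists the units in
    each layer, and line 3 holds the connection flags for the later
    layers. (A sketch inferred from the GLS.top example.)"""
    with open(path) as f:
        n_layers = int(f.readline())
        units = [int(x) for x in f.readline().split()]
        flags = [int(x) for x in f.readline().split()]
    if len(units) != n_layers:
        raise ValueError("layer count does not match the units line")
    return n_layers, units, flags

# Recreate GLS.top and parse it back.
with open("GLS.top", "w") as f:
    f.write("4\n4 20 15 1\n1 1 1\n")

print(read_top("GLS.top"))  # (4, [4, 20, 15, 1], [1, 1, 1])
```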
2. Training and Testing Data
a. Standard Format
All data files are in standard form, meaning that the file is
formatted and that each pattern or vector has its inputs on the
left and its desired outputs on the right. You can type out the
files to examine them, and you can use them with other neural net
software. For example, consider the training data file MAX, part
of which is shown below.
.5844768 .5359043 .6196933
.6196933
.1291312 .4173794 .3405759
.4173794
.0472856 .5994965 .5638752
.5994965
Each training pattern consists of three random numbers. The fourth
number, which is the desired network output, is the maximum of the
three inputs.
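A file in this standard form can be regenerated with a few lines of Python.
This sketch mirrors the layout shown above (inputs on one line, the desired
output on the next); the exact number formatting is an assumption:

```python
import random

def write_max_file(path, n_patterns=300, n_inputs=3):
    """Write a standard-format data file like MAX: each pattern's
    random inputs on one line, its desired output (the maximum of
    the inputs) on the next."""
    with open(path, "w") as f:
        for _ in range(n_patterns):
            x = [random.random() for _ in range(n_inputs)]
            f.write(" ".join(f"{v:.7f}" for v in x) + "\n")
            f.write(f"{max(x):.7f}\n")

write_max_file("MAX", n_patterns=300)
```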
b. Data Files Included With This Package
The MAX data file, which corresponds to calculating the maximum
of 3 random numbers, has 300 patterns, each of which has 3 inputs
and 1 desired output.
The GLS data file has 300 training patterns, with 4 inputs and 1
desired output. Each pattern's inputs are samples at times T, T-6,
T-12, and T-18 of the chaotic time series created by the Mackey-Glass
delay-difference equation (a = 0.2, b = 0.1, tau = 17); the sample at
time T+6 is the desired output.
(Ref. Lapedes, A. & Farber, R. (1987). Nonlinear signal processing
using neural networks: Prediction and system modelling. Tech. Rep.
LA-UR-87-2662, Los Alamos National Laboratory, Los Alamos, NM.)
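A series of this kind can be generated from a delay-difference form of the
Mackey-Glass equation with the parameters quoted above. The unit time step
and the constant initial history in this sketch are assumptions, not the
procedure used to build the GLS file:

```python
def mackey_glass(n, a=0.2, b=0.1, tau=17, x0=1.2):
    """Generate n samples of a Mackey-Glass series using the
    delay-difference form
        x[t+1] = x[t] + a*x[t-tau] / (1 + x[t-tau]**10) - b*x[t]
    with a = 0.2, b = 0.1, tau = 17 as quoted above.
    (Unit step and constant initial history are assumptions.)"""
    x = [x0] * (tau + 1)
    for t in range(tau, tau + n):
        xd = x[t - tau]
        x.append(x[t] + a * xd / (1 + xd ** 10) - b * x[t])
    return x[tau + 1:]

series = mackey_glass(400)
# A GLS-style pattern: inputs at T-18, T-12, T-6, T; desired output at T+6.
T = 100
inputs, target = [series[T - 18], series[T - 12], series[T - 6], series[T]], series[T + 6]
```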
The twod.tra data file has 1,768 patterns with 8 inputs and 7 desired
outputs. The data comes from a remote sensing problem.
3. File and Neural Net Limitations
There is no limitation on data file size.
MLP neural nets are limited to 40 or fewer units in each layer,
including the input layer, one or two hidden layers, and the output
layer.
Functional link networks are limited to 40 inputs, 20 outputs, and
5th degree.
Conventional clustering and self-organizing map clustering are limited
to 32 elements per vector and 2,048 clusters. There is no limit on
the number of input patterns.
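The limits above can be checked before a training run. This is a sketch of a
pre-flight check based only on the limits stated in this section, not a
routine from the package:

```python
# Limits stated above.
MAX_MLP_UNITS = 40     # per layer, input and output layers included
MAX_FLN_INPUTS = 40
MAX_FLN_OUTPUTS = 20
MAX_FLN_DEGREE = 5

def mlp_within_limits(units_per_layer):
    """True if every MLP layer has 40 or fewer units."""
    return all(u <= MAX_MLP_UNITS for u in units_per_layer)

def fln_within_limits(n_inputs, n_outputs, degree):
    """True if a functional link net respects the stated limits."""
    return (n_inputs <= MAX_FLN_INPUTS
            and n_outputs <= MAX_FLN_OUTPUTS
            and degree <= MAX_FLN_DEGREE)

print(mlp_within_limits([4, 20, 15, 1]))  # True  (GLS.top fits)
print(fln_within_limits(41, 1, 3))        # False (too many inputs)
```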